WSEAS Transactions on Computers


Print ISSN: 1109-2750
E-ISSN: 2224-2872

Volume 17, 2018




Shots Temporal Prediction Rules for High-Dimensional Data of Semantic Video Retrieval

AUTHORS: Shaimaa Toriah, Atef Ghalwash, Aliaa Youssef


ABSTRACT: Some research in semantic video retrieval is concerned with predicting the temporal existence of certain concepts. Most methods used in those studies depend on rules defined by experts and on ground-truth annotation. Ground-truth annotation is time-consuming and labour-intensive, and it covers only a limited number of annotated concepts and shots. Because video concepts have interrelated relations, the temporal rules extracted from ground-truth annotation are often inaccurate and incomplete. In contrast, concept detection scores form a large, high-dimensional, continuous-valued dataset that is generated automatically. Temporal association rule algorithms are efficient methods for revealing temporal relations, but they have limitations when applied to high-dimensional, continuous-valued data. These constraints have led to a lack of research using temporal association rules. We therefore propose a novel framework that encodes the high-dimensional, continuous-valued concept detection scores into a single stream of characters without loss of important information and predicts temporal shot behavior by generating temporal association rules.

KEYWORDS: Semantic Video Retrieval, Temporal Association Rules, Principal Component Analysis, Gaussian Mixture Model Clustering, Expectation Maximization Algorithm, Sequential Pattern Discovery Algorithm
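The encoding stage described in the abstract can be sketched in a few lines: reduce the high-dimensional concept detection scores with PCA, cluster the reduced shots with a Gaussian mixture model fitted by the EM algorithm, and map each shot's cluster label to a character, producing a single symbol stream suitable for temporal rule mining. This is a minimal illustration under assumed parameters (374 concepts as in CU-VIREO374, 10 principal components, 8 clusters, random stand-in scores), not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 200 shots x 374 concept detection scores in [0, 1]
scores = rng.random((200, 374))

# Step 1: reduce the high-dimensional scores to a few principal components
reduced = PCA(n_components=10, random_state=0).fit_transform(scores)

# Step 2: cluster the shots with a Gaussian mixture model (fitted via EM)
labels = GaussianMixture(n_components=8, random_state=0).fit_predict(reduced)

# Step 3: encode each shot's cluster label as one character, yielding a
# single stream of characters ordered by shot time
stream = "".join(chr(ord("A") + int(k)) for k in labels)
print(stream[:20])
```

A sequential pattern algorithm such as SPADE (reference [19]) can then mine frequent temporal patterns from windows of this character stream.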

REFERENCES:

[1] Muhammad Nabeel Asghar, Fiaz Hussain, and Rob Manton. Video indexing: a survey. framework, 3(01), 2014.

[2] Lamberto Ballan, Marco Bertini, Alberto Del Bimbo, and Giuseppe Serra. Video annotation and retrieval using ontologies and rule learning. IEEE MultiMedia, 17(4):80–88, 2010.

[3] Richard Bellman. Dynamic Programming. Princeton University Press, Princeton, New Jersey, 1957.

[4] Laurent Bergé, Charles Bouveyron, and Stéphane Girard. HDclassif: An R package for model-based clustering and discriminant analysis of high-dimensional data. Journal of Statistical Software, 46(6):1–29, 2012.

[5] Shripad A Bhat, Omkar V Sardessai, Preetesh P Kunde, and Sarvesh S Shirodkar. Overview of existing content based video retrieval systems. International Journal of Advanced Engineering and Global Technology, 2, 2014.

[6] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

[7] Jie Geng, Zhenjiang Miao, and Hai Chi. Temporal-spatial refinements for video concept fusion. In Asian Conference on Computer Vision, pages 547–559. Springer, 2012.

[8] Alexander Hauptmann, Rong Yan, and Wei-Hao Lin. How many high-level concepts will fill the semantic gap in news video retrieval? In Proceedings of the 6th ACM International Conference on Image and Video Retrieval, pages 627–634. ACM, 2007.

[9] Alexander Hauptmann, Rong Yan, Wei-Hao Lin, Michael Christel, and Howard Wactlar. Can high-level concepts fill the semantic gap in video retrieval? A case study with broadcast news. IEEE Transactions on Multimedia, 9(5):958–966, 2007.

[10] Yu-Gang Jiang, Qi Dai, Jun Wang, Chong-Wah Ngo, Xiangyang Xue, and Shih-Fu Chang. Fast semantic diffusion for large-scale context-based image and video annotation. IEEE Transactions on Image Processing, 21(6):3080–3091, 2012.

[11] Yu-Gang Jiang, Akira Yanagawa, Shih-Fu Chang, and Chong-Wah Ngo. CU-VIREO374: Fusing Columbia374 and VIREO374 for large scale semantic concept detection. Columbia University ADVENT Technical Report #223-2008-1, 2008.

[12] Lin Lin, Mei-Ling Shyu, and Shu-Ching Chen. Association rule mining with a correlation-based interestingness measure for video semantic concept detection. International Journal of Information and Decision Sciences, 4(2-3):199–216, 2012.

[13] Ken-Hao Liu, Ming-Fang Weng, Chi-Yao Tseng, Yung-Yu Chuang, and Ming-Syan Chen. Association and temporal rule mining for post-filtering of semantic concept detection in video. IEEE Transactions on Multimedia, 10(2):240–251, 2008.

[14] Milind Naphade, John R Smith, Jelena Tesic, Shih-Fu Chang, Winston Hsu, Lyndon Kennedy, Alexander Hauptmann, and Jon Curtis. Large-scale concept ontology for multimedia. IEEE MultiMedia, 13(3):86–91, 2006.

[15] John Rice. Mathematical statistics and data analysis. Nelson Education, 2006.

[16] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2014.

[17] Xiao-Yong Wei, Chong-Wah Ngo, and Yu-Gang Jiang. Selection of concept detectors for video search by ontology-enriched semantic spaces. IEEE Transactions on Multimedia, 10(6):1085–1096, 2008.

[18] Jun Yang and Alexander G Hauptmann. Exploring temporal consistency for video analysis and retrieval. In Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pages 33–42. ACM, 2006.

[19] Mohammed J Zaki. SPADE: An efficient algorithm for mining frequent sequences. Machine learning, 42(1-2):31–60, 2001.

WSEAS Transactions on Computers, ISSN / E-ISSN: 1109-2750 / 2224-2872, Volume 17, 2018, Art. #20, pp. 163-172


Copyright © 2018 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0
